55 research outputs found

    On Optimal Anticodes over Permutations with the Infinity Norm

    Full text link
    Motivated by the set-antiset method for codes over permutations under the infinity norm, we study anticodes under this metric. For half of the parameter range we classify all the optimal anticodes, which is equivalent to finding the maximum permanent of certain (0,1)-matrices. For the rest of the cases we show constraints on the structure of optimal anticodes.
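
    The permanent connection can be made concrete with a standard fact closely related to the equivalence mentioned above: the number of permutations s of {0, ..., n-1} with |s(i) - i| <= d for every i (a ball, and in particular an anticode, under the infinity norm) equals the permanent of the (0,1) band matrix with ones exactly where |i - j| <= d. The brute-force sketch below only checks this fact for small example values of n and d; it is not the paper's classification method.

```python
from itertools import permutations

def ball_size_brute_force(n, d):
    """Count permutations s of {0, ..., n-1} with max_i |s(i) - i| <= d,
    i.e. the size of the infinity-norm ball of radius d around the identity."""
    return sum(
        1 for s in permutations(range(n))
        if max(abs(s[i] - i) for i in range(n)) <= d
    )

def permanent(A):
    """Permanent of a square matrix, directly from the definition
    perm(A) = sum over permutations s of prod_i A[i][s[i]] (fine for small n)."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][s[i]]
        total += prod
    return total

# Example parameters (arbitrary small choices).
n, d = 6, 2
# Band matrix with ones exactly where |i - j| <= d.
A = [[1 if abs(i - j) <= d else 0 for j in range(n)] for i in range(n)]
assert ball_size_brute_force(n, d) == permanent(A)
print(permanent(A))
```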

    A family of optimal locally recoverable codes

    Full text link
    A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most r) of other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter r is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over r points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").
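
    A minimal sketch of the interpolation-based recovery step described above, under simplifying assumptions: the symbols of one repair group are taken to be evaluations of a polynomial of degree less than r, so an erased symbol can be recovered by Lagrange interpolation through the r surviving ones. The prime p, the locality r, the evaluation points, and the local polynomial below are arbitrary choices for illustration, not the paper's specific construction.

```python
# Illustrative sketch of local recovery by interpolation over r points.
p = 13                      # small prime field GF(p), chosen for illustration
r = 3                       # locality: recovery uses r surviving symbols

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

def lagrange_recover(points, x_erased):
    """Recover the value at x_erased from r known (x, y) pairs, assuming the
    underlying polynomial has degree < r."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x_erased - xm) % p
                den = den * (xj - xm) % p
        total = (total + yj * num * pow(den, p - 2, p)) % p  # den^{-1} via Fermat
    return total

local_poly = [5, 2, 7]                 # degree < r polynomial on this group
group = [1, 3, 4, 9]                   # r + 1 evaluation points in the group
symbols = {x: poly_eval(local_poly, x) for x in group}

erased = 4                             # erase one symbol of the group
known = [(x, y) for x, y in symbols.items() if x != erased][:r]
assert lagrange_recover(known, erased) == symbols[erased]
```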

    Long MDS Codes for Optimal Repair Bandwidth

    Get PDF
    MDS codes are erasure-correcting codes that can correct the maximum number of erasures given the number of redundancy or parity symbols. If an MDS code has r parities and no more than r erasures occur, then by transmitting all the remaining data in the code one can recover the original information. However, it was shown that in order to recover a single symbol erasure, only a fraction of 1/r of the information needs to be transmitted. This fraction is called the repair bandwidth (fraction). Explicit code constructions were given in previous works. If we view each symbol in the code as a vector or a column, then the code forms a 2D array, and such codes are especially widely used in storage systems. In this paper, we ask the following question: given the length of the column l, can we construct high-rate MDS array codes with optimal repair bandwidth of 1/r, whose code length is as long as possible? We give code constructions such that the code length is (r + 1) log_r l.
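
    A rough numeric illustration of the two quantities above (the 1/r repair bandwidth and the achievable code length), assuming l is a power of r; the concrete values of r and l are example choices, not parameters from the paper.

```python
# Illustrative arithmetic only; r and l below are example choices.
r = 2                        # number of parity columns
l = 1024                     # column length, here l = r**10
m = 10                       # log_r(l)
n = (r + 1) * m              # code length claimed achievable above: 30 columns

surviving = (n - 1) * l                 # symbols held by the n - 1 remaining columns
optimal_repair = surviving // r         # transmit a 1/r fraction of them: 14,848 symbols
naive_repair = (n - r) * l              # versus reading k = n - r full columns: 28,672

print(n, optimal_repair, naive_repair)
```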

    Explicit MDS Codes for Optimal Repair Bandwidth

    Get PDF
    MDS codes are erasure-correcting codes that can correct the maximum number of erasures for a given number of redundancy or parity symbols. If an MDS code has r parities and no more than r erasures occur, then by transmitting all the remaining data in the code, the original information can be recovered. However, it was shown that in order to recover a single symbol erasure, only a fraction of 1/r of the information needs to be transmitted. This fraction is called the repair bandwidth (fraction). Explicit code constructions were given in previous works. If we view each symbol in the code as a vector or a column over some field, then the code forms a 2D array, and such codes are especially widely used in storage systems. In this paper, we address the following question: given the column length l and the number of parities r, can we construct high-rate MDS array codes with optimal repair bandwidth of 1/r, whose code length is as long as possible? We give code constructions such that the code length is (r + 1) log_r l.

    MDS Array Codes with Optimal Rebuilding

    Get PDF
    MDS array codes are widely used in storage systems to protect data against erasures. We address the rebuilding ratio problem, namely, in the case of erasures, what is the fraction of the remaining information that needs to be accessed in order to rebuild exactly the lost information? It is clear that when the number of erasures equals the maximum number of erasures that an MDS code can correct, the rebuilding ratio is 1 (access all the remaining information). However, the interesting (and more practical) case is when the number of erasures is smaller than the erasure-correcting capability of the code. For example, consider an MDS code that can correct two erasures: what is the smallest amount of information that one needs to access in order to correct a single erasure? Previous work showed that the rebuilding ratio is bounded between 1/2 and 3/4; however, the exact value was left as an open problem. In this paper, we solve this open problem and prove that for the case of a single erasure with a 2-erasure-correcting code, the rebuilding ratio is 1/2. In general, we construct a new family of r-erasure-correcting MDS array codes that has an optimal rebuilding ratio of 1/r in the case of a single erasure. Our array codes have efficient encoding and decoding algorithms (for the case r = 2 they use a finite field of size 3) and an optimal update property.
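
    A small arithmetic illustration of the ratios quoted above for a single column erasure; the array dimensions k and l are example values, not parameters from the paper.

```python
# Symbols accessed under the ratios quoted above; k and l are example values.
k, r, l = 10, 2, 64          # 10 information columns, 2 parity columns, column length 64
remaining = (k + r - 1) * l  # symbols left in the array after one column is erased

print(remaining)             # ratio 1   : access all 704 remaining symbols
print(3 * remaining // 4)    # ratio 3/4 : the earlier upper bound -> 528 symbols
print(remaining // 2)        # ratio 1/2 : the optimal ratio shown here -> 352 symbols
```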

    Access vs. Bandwidth in Codes for Storage

    Get PDF
    Maximum distance separable (MDS) codes are widely used in storage systems to protect against disk (node) failures. A node is said to have capacity l over some field F if it can store that amount of symbols of the field. An (n, k, l) MDS code uses n nodes of capacity l to store k information nodes. The MDS property guarantees resiliency to any n - k node failures. An optimal bandwidth (resp. optimal access) MDS code communicates (resp. accesses) the minimum amount of data during the repair process of a single failed node. It was shown that this amount equals a fraction of 1/(n - k) of the data stored in each node. In previous optimal-bandwidth constructions, l scaled polynomially with k in codes of asymptotic rate less than 1, while in constructions with a constant number of parities, i.e., rate approaching 1, l scaled exponentially with k. In this paper, we focus on the latter case of a constant number of parities n - k = r and ask the following question: given the capacity l of a node, what is the largest number of information disks k in an optimal-bandwidth (resp. optimal-access) (k + r, k, l) MDS code? We give an upper bound for the general case, and two tight bounds in the special cases of two important families of codes. Moreover, the bounds show that in some cases an optimal-bandwidth code has a larger k than an optimal-access code, and therefore these two measures are not equivalent.